Stochastic control

Stochastic control is a subfield of control theory that deals with the existence of uncertainty in the data. The designer assumes, in a Bayesian probability-driven fashion, that random noise with a known probability distribution affects the evolution of the state and the observations available to the controller. Stochastic control aims to design the optimal controller, one that performs the desired control task with minimum average cost despite the presence of this noise.[1]
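For concreteness, a generic discrete-time formulation (the notation here is illustrative and is not drawn from the cited references) is

$$\min_{u_0, \dots, u_{N-1}} \ \mathbb{E}\!\left[ \sum_{t=0}^{N-1} c_t(x_t, u_t) + c_N(x_N) \right] \quad \text{subject to} \quad x_{t+1} = f_t(x_t, u_t, w_t),$$

where $x_t$ is the state, $u_t$ the control, and $w_t$ random noise with a known distribution; each $u_t$ may depend only on the observations available to the controller up to time $t$.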

An extremely well-studied formulation in stochastic control is the linear-quadratic-Gaussian (LQG) problem, in which the model is linear, the objective function is the expected value of a quadratic form, and the additive disturbances are Gaussian. A basic result for discrete-time centralized systems is the certainty equivalence property:[2] the optimal control solution in this case is the same as would be obtained in the absence of the additive disturbances. The property holds for any system that is merely linear and quadratic (LQ); the Gaussian assumption additionally ensures that the optimal control laws, which are based on the certainty equivalence property, are linear functions of the observations available to the controller.
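As a minimal illustration of certainty equivalence (a sketch only: the system matrices, horizon, and noise level below are made up, and full state observation is assumed), the feedback gains of a finite-horizon LQ regulator are computed by the noise-free backward Riccati recursion, so the noise covariance never enters the gain computation:

```python
# Illustrative sketch (not taken from the cited references): a finite-horizon
# discrete-time LQ regulator with additive Gaussian noise and full state
# observation.  Certainty equivalence: the gains K_t come from the same
# backward Riccati recursion as in the noise-free problem, so the noise
# covariance never appears in the gain computation.
import numpy as np

def lqr_gains(A, B, Q, R, Qf, N):
    """Backward Riccati recursion; returns feedback gains [K_0, ..., K_{N-1}]."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]  # computed backward in time, so reverse to K_0..K_{N-1}

# Made-up double-integrator example.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R, Qf, N = np.eye(2), np.array([[0.1]]), 10 * np.eye(2), 20

K = lqr_gains(A, B, Q, R, Qf, N)

rng = np.random.default_rng(0)
x = np.array([5.0, 0.0])
for t in range(N):
    u = -K[t] @ x                                   # certainty-equivalent law
    w = rng.multivariate_normal(np.zeros(2), 0.01 * np.eye(2))
    x = A @ x + B @ u + w                           # additive Gaussian noise
print("final state:", x)
```

In the partially observed Gaussian case, the state x in the feedback law would be replaced by its Kalman-filter estimate, giving the linear-in-observations, certainty-equivalent structure described above.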

This property fails to hold for decentralized control, as Witsenhausen demonstrated with his celebrated counterexample.

Any deviation from the above assumptions (a nonlinear state equation, a non-quadratic objective function, or noise in the multiplicative parameters of the model) causes the certainty equivalence property to fail. In the discrete-time case with uncertainty about the parameter values in the transition matrix and/or the control-response matrix of the state equation, but still with a linear state equation and a quadratic objective function, a matrix Riccati equation can still be obtained for iterating backward to each period's solution,[3][2]: ch. 13  as sketched below. The discrete-time case of a non-quadratic loss function but only additive disturbances can also be handled, albeit with more complications.[4]
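A representative form of such a recursion, stated here only as a sketch (the precise setup varies across treatments; see [3] and [2]: ch. 13): for a state equation $x_{t+1} = A_t x_t + B_t u_t + e_t$ with random coefficient matrices $A_t$, $B_t$ of known distribution and per-period cost $x_t^{\top} Q x_t + u_t^{\top} R u_t$, the optimal control is the linear feedback $u_t = -G_t x_t$ with

$$G_t = \left( R + \mathbb{E}\!\left[ B_t^{\top} P_{t+1} B_t \right] \right)^{-1} \mathbb{E}\!\left[ B_t^{\top} P_{t+1} A_t \right],$$

$$P_t = Q + \mathbb{E}\!\left[ A_t^{\top} P_{t+1} A_t \right] - \mathbb{E}\!\left[ A_t^{\top} P_{t+1} B_t \right] G_t,$$

where the expectations are taken over the random coefficients. Because the gains depend on second moments of $A_t$ and $B_t$, not just their means, certainty equivalence fails; with deterministic coefficients the recursion reduces to the standard matrix Riccati equation.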

References

  1. ^ Definition from Answers.com
  2. ^ a b Chow, Gregory C., Analysis and Control of Dynamic Economic Systems, Wiley, 1976.
  3. ^ Turnovsky, Stephen, "Optimal stabilization policies for stochastic linear systems: The case of correlated multiplicative and additive disturbances," Review of Economic Studies 43(1), 1976, 191-94.
  4. ^ Mitchell, Douglas W., "Tractable risk sensitive control based on approximate expected utility," Economic Modelling, April 1990, 161-164.

See also